We discuss the process of building semantic maps, how to interactively label entities in them, and how to use them to enable new navigation behaviors for specific scenarios. We utilize planar surfaces, such as walls and tables, and static objects, such as door signs, as features in our semantic mapping approach. Users can interactively annotate these features by having the robot follow them, entering the label through a mobile app, and performing a pointing gesture toward the landmark of interest. These landmarks can later be used to generate context-aware motions. Our pointing-gesture approach can reliably estimate the target object using human joint positions and detect ambiguous gestures with probabilistic modeling. Our person-following method attempts to maximize future utility by searching over future actions, assuming a constant-velocity model for the human. We describe a simple method to extract metric goals from a semantic map landmark and present a human-aware path planner that considers the personal spaces of people to generate socially aware paths. Finally, we demonstrate context awareness for person following in two scenarios: interactive labeling and door passing. We believe that as sensing technology improves and maps with richer semantic information become commonplace, new opportunities for intelligent navigation algorithms will emerge.
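As a minimal sketch of two ideas mentioned above, the snippet below illustrates a constant-velocity prediction of the followed person and a Gaussian personal-space penalty of the kind a socially aware planner might add to its path cost. The function names and the `sigma` parameter are illustrative assumptions, not the paper's actual implementation.

```python
import math

def predict_human(position, velocity, dt):
    """Predict a person's future (x, y) position under a
    constant-velocity model, as assumed for person following."""
    return (position[0] + velocity[0] * dt,
            position[1] + velocity[1] * dt)

def personal_space_cost(robot_xy, person_xy, sigma=0.8):
    """Gaussian penalty that grows as the robot approaches a person.
    sigma (meters) is a hypothetical personal-space radius; the
    planner would add this cost to each candidate path point."""
    d2 = ((robot_xy[0] - person_xy[0]) ** 2
          + (robot_xy[1] - person_xy[1]) ** 2)
    return math.exp(-d2 / (2.0 * sigma ** 2))

# A path segment passing close to a person incurs a higher cost
# than one keeping its distance, steering the planner away.
near = personal_space_cost((0.5, 0.0), (0.0, 0.0))
far = personal_space_cost((3.0, 0.0), (0.0, 0.0))
```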